Convergence of Batch Split-Complex Backpropagation Algorithm for Complex-Valued Neural Networks
Authors
Abstract
The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of the BSCBP algorithm is monotone during the training iteration process and that the gradient of the error function tends to zero. Under a moderate additional condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical analysis.
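As a concrete illustration of this setting, here is a minimal sketch of a batch split-complex update for a single complex-valued neuron with the split activation tanh(Re z) + i tanh(Im z) and a constant learning rate. The network size, toy data, and learning rate are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def split_tanh(z):
    """Split activation: real-valued tanh applied separately to Re(z) and Im(z)."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def bscbp_epoch(w, X, d, eta):
    """One batch iteration: accumulate the gradient over all samples, then take
    a single step with the constant learning rate eta.

    w : complex weights, shape (n,); X : complex inputs, shape (m, n);
    d : complex targets, shape (m,)
    """
    m = len(d)
    net = X @ w                           # complex net input, one value per sample
    e = split_tanh(net) - d               # complex output error
    dr = 1.0 - np.tanh(net.real) ** 2     # tanh derivative on the real part
    di = 1.0 - np.tanh(net.imag) ** 2     # tanh derivative on the imaginary part
    # Gradient of E(w) = (1/2m) * sum_m |y_m - d_m|^2 with respect to (Re w, Im w),
    # packed back into one complex vector: (1/m) sum_m (Re(e)*dr + i*Im(e)*di) * conj(x_m).
    grad = X.conj().T @ (e.real * dr + 1j * e.imag * di) / m
    E = 0.5 * np.mean(np.abs(e) ** 2)
    return w - eta * grad, E, grad

# Toy data: recover the weights of a known split-complex neuron.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)) + 1j * rng.normal(size=(50, 3))
d = split_tanh(X @ np.array([0.4 - 0.2j, -0.3 + 0.5j, 0.1 + 0.1j]))
w = np.zeros(3, dtype=complex)
for _ in range(200):
    w, E, grad = bscbp_epoch(w, X, d, eta=0.05)
# E should decrease monotonically over the epochs and ||grad|| should approach zero.
print(E, np.linalg.norm(grad))
```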
Similar resources
Convergence of Split-Complex Backpropagation Algorithm with Momentum
This paper investigates a split-complex backpropagation algorithm with momentum (SCBPM) for complex-valued neural networks. Some convergence results for SCBPM are proved under relaxed conditions compared with existing results. The monotonicity of the error function during the training iteration process is also guaranteed. Two numerical examples are given to support the theoretical findings.
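For illustration, the following is a minimal sketch of a momentum-style update of the form Δw(t) = -η∇E(w(t)) + τΔw(t-1) applied to a complex weight vector. The gradient function, toy least-squares problem, and the rates η and τ are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

def train_with_momentum(grad_fn, w0, eta=0.05, tau=0.8, epochs=200):
    """Gradient descent with a momentum term on the previous weight increment."""
    w = np.array(w0, dtype=complex)
    delta_prev = np.zeros_like(w)
    for _ in range(epochs):
        delta = -eta * grad_fn(w) + tau * delta_prev   # momentum update rule
        w = w + delta
        delta_prev = delta
    return w

# Toy problem: minimise the mean-squared error E(w) = (1/2m) * ||X w - d||^2
# over complex weights; its gradient packed as a complex vector is X^H (X w - d) / m.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2)) + 1j * rng.normal(size=(20, 2))
d = X @ np.array([1.0 - 0.5j, -0.25 + 2.0j])
grad_fn = lambda w: X.conj().T @ (X @ w - d) / len(d)
print(train_with_momentum(grad_fn, np.zeros(2)))   # should approach [1-0.5j, -0.25+2j]
```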
On the Appropriateness of Complex-Valued Neural Networks for Speech Enhancement
Although complex-valued neural networks (CVNNs) – networks which can operate with complex arithmetic – have been around for a while, they have not been reconsidered since the breakthrough of deep network architectures. This paper presents a critical assessment of whether the novel tool set of deep neural networks (DNNs) should be extended to complex-valued arithmetic. Indeed, with DNNs ma...
Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function
A complex-valued neural network is a neural network which consists of complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and visio...
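To make the definition above concrete, here is a minimal sketch of a single complex-valued neuron with complex inputs, weights, and threshold (bias). The fully-complex tanh activation and the numbers are illustrative choices only.

```python
import numpy as np

def complex_neuron(x, w, b):
    """Forward pass of one complex-valued neuron: activation(w . x + b)."""
    return np.tanh(x @ w + b)          # numpy's tanh accepts complex arguments

x = np.array([0.5 + 0.2j, -0.1 + 0.9j])   # complex-valued input
w = np.array([0.3 - 0.4j, 0.8 + 0.1j])    # complex-valued weights
b = 0.05 - 0.02j                          # complex-valued threshold
print(complex_neuron(x, w, b))
```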
Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks
The online gradient method has been widely used in training neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during the training procedure. Under certain conditions, by first showing the monotonicity of the error function, it is proved that the gradient of the error function tends to z...
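A minimal sketch of an online (sample-by-sample) split-complex gradient update follows. The diminishing learning-rate rule η_t = η₀ / (1 + t/T) used here is only an illustrative choice of adaptive rate; the paper's actual rule, network, and data may differ.

```python
import numpy as np

def split_tanh(z):
    """Split activation: real-valued tanh applied separately to Re(z) and Im(z)."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def online_step(w, x, d, eta):
    """Update the complex weights using a single training sample (x, d)."""
    net = w @ x                            # complex net input for this sample
    e = split_tanh(net) - d                # complex output error
    dr = 1.0 - np.tanh(net.real) ** 2
    di = 1.0 - np.tanh(net.imag) ** 2
    grad = (e.real * dr + 1j * e.imag * di) * x.conj()
    return w - eta * grad

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) + 1j * rng.normal(size=(500, 3))
w_true = np.array([0.2 + 0.3j, -0.5j, 0.7 + 0.0j])
D = split_tanh(X @ w_true)

w, eta0, T = np.zeros(3, dtype=complex), 0.1, 100.0
for t, (x, d) in enumerate(zip(X, D)):
    w = online_step(w, x, d, eta0 / (1.0 + t / T))   # diminishing step size
print(w)   # should be close to w_true
```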
Recursive Least-Squares Backpropagation Algorithm for Stop-and-Go Decision-Directed Blind Equalization
Stop-and-go decision-directed (S-and-G-DD) equalization is the most primitive blind equalization (BE) method for cancelling intersymbol interference in data communication systems. Recently, this scheme has been applied to complex-valued multilayer feedforward neural networks, giving robust results with a lower mean-square error at the expense of slow convergence. To overcome this problem,...
Journal title:
Volume, issue:
Pages: -
Publication date: 2009